Hello everyone and welcome back to the computer vision lecture series. This is lecture 5, part 1. We will continue talking about feature detectors. In the last lecture we talked about the Harris corner detector, which is one of the many types of feature detectors, so let us continue in that direction. In this fifth lecture we will cover three main topics. As we saw in the last lecture, there is an issue of choosing the right scale, because the Harris corner detector is not covariant to scale. Therefore, in order to find the same corner in a scaled version of the same image, we need to adopt different techniques, and we are going to look into them in this part of the lecture. In the next part we will also talk about a specific scale invariant image descriptor which gained a lot of popularity and was one of the strongest feature detectors of its time. And then we will also do some feature matching.

So, let us take an overview of what we have looked into so far. Under local features we have mainly studied one feature detector, the Harris corner detector. We also saw the properties a good feature detector is required to have. In the following lectures we will look into description and matching, but for now we continue with feature detectors, scale invariant feature detectors to be specific.

So, the question that comes to mind, and I want to pose it to you, is: how can the scale of a feature point be modeled? When we use a particular modeling technique or algorithm, how does it take into account the different scales a particular feature can have in multiple images? Say you take a photograph of an object, then you move a little or zoom in and take another picture of the same object, so it now appears at a different scale, and you want to detect the same features at both scales. So, scale poses the first interesting problem for our analysis of feature detectors.
With the Harris corner detector we saw that it is not scale covariant, and therefore we look into automatic scale selection techniques. For example, in this case on the left we have an image with a feature point marked here, and a scaled version of the same image is shown here with the same feature point. Basically, we need to find a function that lets us relate these two feature points across the two images. How do we find such a function f? What kind of response function are we looking for that gives us this capability?

When we look at the response of the function f, it looks like this on the left-hand side of the slide, and on the scaled version it looks like this. On the x axis we have the different scales at which the function f is evaluated, the same range in both cases, but it is not obvious which part of one response corresponds to which part of the other, so we have no easy way of knowing at which scale the features match. However, intuitively we know that where the function reaches its peak is where the responses of the two images are comparable, that is, where the matching features are located. So basically we want to find these peak locations of the same function evaluated across scales: the scale at the peak serves as the characteristic scale of the feature point, and it tells us both the scale of the feature and which feature points are matched across images.
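To make this concrete, here is a minimal sketch of the idea, not the exact formulation from the slides: evaluate a scale-normalized response function at a fixed point over a range of scales and keep the scale where the magnitude of the response peaks. The use of the Laplacian of Gaussian via SciPy, the function name, and the sigma range are my own assumptions for illustration.

```python
# Hypothetical sketch of automatic scale selection at one feature point.
# Assumption: the response function f is the scale-normalized Laplacian of Gaussian.
import numpy as np
from scipy.ndimage import gaussian_laplace

def characteristic_scale(image, x, y, sigmas=np.geomspace(1.0, 16.0, 20)):
    """Return the sigma at which the scale-normalized LoG response peaks at (x, y)."""
    responses = []
    for sigma in sigmas:
        # Multiplying by sigma**2 normalizes the LoG so that responses
        # at different scales are directly comparable.
        log_img = (sigma ** 2) * gaussian_laplace(image.astype(float), sigma)
        responses.append(abs(log_img[y, x]))
    return sigmas[int(np.argmax(responses))]
```

If the second image is the first one magnified by some factor, the characteristic scales selected at corresponding points should differ by roughly that same factor, which is exactly the covariance we are after.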
So what could be a good, useful signature function? Again we go back to Gaussians, as we saw earlier. A single Gaussian looks like this, its first derivative looks like this, and its second derivative, also called the Laplacian of Gaussian, is given in the manner shown here. We take a function f which is a Gaussian at different scales, and its responses at these different scales are shown in this graph. Basically, when you increase the scale, that is, when you increase the variance of the Gaussian kernel, the responses change across scales: as you move up in scale space the responses are spread out and delayed, because they effectively have a bigger window size.
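For reference, the standard one-dimensional forms of these functions (the slide may use slightly different notation) are

$$
g_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{x^2}{2\sigma^2}}, \qquad
g_\sigma'(x) = -\frac{x}{\sigma^2}\, g_\sigma(x), \qquad
g_\sigma''(x) = \frac{x^2 - \sigma^2}{\sigma^4}\, g_\sigma(x),
$$

and in two dimensions the Laplacian of Gaussian used as a scale response is

$$
\nabla^2 G_\sigma(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\, G_\sigma(x, y).
$$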